Accurate segmentation of organs-at-risk (OARs) is a precursor to optimizing radiation treatment planning. Existing deep learning-based multi-scale fusion architectures have demonstrated a tremendous capacity for 2D medical image segmentation. The key to their success is aggregating global context and maintaining high-resolution representations. However, when translated to 3D segmentation problems, existing multi-scale fusion architectures may perform poorly due to their heavy computational overhead and substantial data requirements. To address this issue, we propose a new OAR segmentation framework, called OARFocalFuseNet, which fuses multi-scale features and employs focal modulation to capture global-local context at multiple scales. Each resolution stream is enriched with features from different resolution scales, and multi-scale information is aggregated to model diverse contextual ranges. As a result, feature representations are further boosted. Comprehensive comparisons in our experimental setup on OAR segmentation as well as multi-organ segmentation show that the proposed OARFocalFuseNet outperforms recent state-of-the-art methods on the publicly available OpenKBP dataset and the Synapse multi-organ segmentation dataset. Both of the proposed methods (3D-MSF and OARFocalFuseNet) showed promising performance in terms of standard evaluation metrics. Our best-performing method (OARFocalFuseNet) obtained a dice coefficient of 0.7995 and a Hausdorff distance of 5.1435 on the OpenKBP dataset, and a dice coefficient of 0.8137 on the Synapse multi-organ segmentation dataset.
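The sketch below illustrates the focal modulation mechanism the abstract refers to (in the spirit of Focal Modulation Networks): hierarchical depthwise convolutions aggregate progressively larger context, which is gated and used to modulate a query projection. It is a simplified 2D PyTorch sketch under assumed layer sizes and focal levels, not the authors' OARFocalFuseNet implementation.

```python
# Minimal, simplified sketch of a focal modulation block; NOT the authors' code.
# Layer sizes, number of focal levels, and kernel growth are illustrative assumptions.
import torch
import torch.nn as nn

class FocalModulation2D(nn.Module):
    def __init__(self, dim: int, focal_levels: int = 3, focal_window: int = 3):
        super().__init__()
        self.focal_levels = focal_levels
        # One projection produces the query, the initial context, and per-level gates.
        self.f = nn.Conv2d(dim, 2 * dim + (focal_levels + 1), kernel_size=1)
        # Hierarchical depthwise convolutions aggregate context at growing scales.
        self.focal_convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size=focal_window + 2 * k,
                          padding=(focal_window + 2 * k) // 2, groups=dim),
                nn.GELU(),
            )
            for k in range(focal_levels)
        ])
        self.h = nn.Conv2d(dim, dim, kernel_size=1)      # modulator projection
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)   # output projection

    def forward(self, x):                                # x: (B, C, H, W)
        q, ctx, gates = torch.split(
            self.f(x), [x.shape[1], x.shape[1], self.focal_levels + 1], dim=1)
        ctx_all = 0
        for k, conv in enumerate(self.focal_convs):
            ctx = conv(ctx)                              # local -> progressively global
            ctx_all = ctx_all + ctx * gates[:, k:k + 1]
        # Global context: spatial average, weighted by the last gate.
        ctx_all = ctx_all + ctx.mean(dim=(2, 3), keepdim=True) * gates[:, -1:]
        return self.proj(q * self.h(ctx_all))            # query modulated by context

block = FocalModulation2D(dim=64)
print(block(torch.randn(1, 64, 32, 32)).shape)           # torch.Size([1, 64, 32, 32])
```

A 3D counterpart for volumetric OAR segmentation would presumably swap the Conv2d operators for Conv3d and average over three spatial dimensions.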
Diffractive optical networks provide rich opportunities for visual computing tasks since the spatial information of a scene can be directly accessed by a diffractive processor without requiring any digital pre-processing steps. Here we present data class-specific transformations all-optically performed between the input and output fields-of-view (FOVs) of a diffractive network. The visual information of the objects is encoded into the amplitude (A), phase (P), or intensity (I) of the optical field at the input, which is all-optically processed by a data class-specific diffractive network. At the output, an image sensor-array directly measures the transformed patterns, all-optically encrypted using the transformation matrices pre-assigned to different data classes, i.e., a separate matrix for each data class. The original input images can be recovered by applying the correct decryption key (the inverse transformation) corresponding to the matching data class, while applying any other key will lead to loss of information. The class-specificity of these all-optical diffractive transformations creates opportunities where different keys can be distributed to different users; each user can decode the acquired images of only one data class, serving multiple users in an all-optically encrypted manner. We numerically demonstrated all-optical class-specific transformations covering A-->A, I-->I, and P-->I transformations using various image datasets. We also experimentally validated the feasibility of this framework by fabricating a class-specific I-->I transformation diffractive network using two-photon polymerization and successfully testing it at 1550 nm wavelength. Data class-specific all-optical transformations provide a fast and energy-efficient method for image and data encryption, enhancing data security and privacy.
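The following is a toy numerical analogy for the encryption scheme described above: each data class is assigned an invertible transformation matrix acting on the flattened image, and only the matching inverse (the decryption key) recovers the input. It is a plain linear-algebra sketch with assumed sizes, not a simulation of the diffractive optical network itself.

```python
# Toy analogy: class-specific transformation matrices as encryption keys.
# The matrix shapes and class names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                                    # flattened image size (assumed)

# Pre-assigned transformation matrices, one per data class.
T = {c: rng.standard_normal((n, n)) for c in ("class_A", "class_B")}
keys = {c: np.linalg.inv(T[c]) for c in T}     # decryption keys (inverse transforms)

img = rng.random(n)                            # an input image from class_A
encrypted = T["class_A"] @ img                 # the all-optical transform (here: matmul)

recovered_ok  = keys["class_A"] @ encrypted    # correct key -> faithful recovery
recovered_bad = keys["class_B"] @ encrypted    # wrong key -> information is lost

print(np.allclose(recovered_ok, img))          # True
print(np.linalg.norm(recovered_bad - img) > 1) # True (large reconstruction error)
```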
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
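As a concrete illustration of the most common strategy reported above (patch-based training for volumes too large to process at once), here is a minimal sketch that crops random 3D patches from a volume. The patch size and sampling scheme are illustrative assumptions, not a prescription from the survey.

```python
# Hedged sketch of patch-based handling of large 3D volumes; sizes are assumptions.
import numpy as np

def sample_patches(volume: np.ndarray, patch=(64, 64, 64), n: int = 8,
                   rng: np.random.Generator = np.random.default_rng(0)):
    """Yield n random patches from a (D, H, W) volume."""
    D, H, W = volume.shape
    pd, ph, pw = patch
    for _ in range(n):
        d = rng.integers(0, D - pd + 1)
        h = rng.integers(0, H - ph + 1)
        w = rng.integers(0, W - pw + 1)
        yield volume[d:d + pd, h:h + ph, w:w + pw]

volume = np.zeros((128, 256, 256), dtype=np.float32)    # e.g., a CT volume
patches = list(sample_patches(volume))
print(len(patches), patches[0].shape)                    # 8 (64, 64, 64)
```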
Recent large-scale image generation models such as Stable Diffusion have exhibited an impressive ability to generate fairly realistic images starting from a very simple text prompt. Could such models render real images obsolete for training image prediction models? In this paper, we answer part of this provocative question by questioning the need for real images when training models for ImageNet classification. More precisely, provided only with the class names that have been used to build the dataset, we explore the ability of Stable Diffusion to generate synthetic clones of ImageNet and measure how useful they are for training classification models from scratch. We show that with minimal and class-agnostic prompt engineering those ImageNet clones we denote as ImageNet-SD are able to close a large part of the gap between models produced by synthetic images and models trained with real images for the several standard classification benchmarks that we consider in this study. More importantly, we show that models trained on synthetic images exhibit strong generalization properties and perform on par with models trained on real data.
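A hedged sketch of the kind of pipeline the abstract describes: generating synthetic training images from class names alone with the open-source `diffusers` Stable Diffusion pipeline. The checkpoint name, prompt template, image count, and output paths below are assumptions for illustration, not the paper's exact recipe.

```python
# Hedged sketch: synthetic class images from class names only via Stable Diffusion.
# Checkpoint, prompt template, and sampling settings are illustrative assumptions.
import os
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

class_names = ["tench", "goldfish", "great white shark"]   # ImageNet class names
images_per_class = 4
os.makedirs("imagenet_sd", exist_ok=True)

for name in class_names:
    prompt = f"a photo of a {name}"          # minimal, class-agnostic prompt template
    for i in range(images_per_class):
        image = pipe(prompt).images[0]       # one synthetic sample per call
        image.save(f"imagenet_sd/{name.replace(' ', '_')}_{i}.png")
```

Scaling such a loop to hundreds of images per class yields a synthetic "clone" dataset on which a classifier can be trained from scratch, as studied in the abstract.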
Multispectral imaging has been used for numerous applications in e.g., environmental monitoring, aerospace, defense, and biomedicine. Here, we present a diffractive optical network-based multispectral imaging system trained using deep learning to create a virtual spectral filter array at the output image field-of-view. This diffractive multispectral imager performs spatially-coherent imaging over a large spectrum, and at the same time, routes a pre-determined set of spectral channels onto an array of pixels at the output plane, converting a monochrome focal plane array or image sensor into a multispectral imaging device without any spectral filters or image recovery algorithms. Furthermore, the spectral responsivity of this diffractive multispectral imager is not sensitive to input polarization states. Through numerical simulations, we present different diffractive network designs that achieve snapshot multispectral imaging with 4, 9 and 16 unique spectral bands within the visible spectrum, based on passive spatially-structured diffractive surfaces, with a compact design that axially spans ~72 times the mean wavelength of the spectral band of interest. Moreover, we experimentally demonstrate a diffractive multispectral imager based on a 3D-printed diffractive network that creates at its output image plane a spatially-repeating virtual spectral filter array with 2x2=4 unique bands at terahertz spectrum. Due to their compact form factor and computation-free, power-efficient and polarization-insensitive forward operation, diffractive multispectral imagers can be transformative for various imaging and sensing applications and be used at different parts of the electromagnetic spectrum where high-density and wide-area multispectral pixel arrays are not widely available.
One of the main challenges in electroencephalogram (EEG) based brain-computer interface (BCI) systems is learning the subject/session invariant features to classify cognitive activities within an end-to-end discriminative setting. We propose a novel end-to-end machine learning pipeline, EEG-NeXt, which facilitates transfer learning by: i) aligning the EEG trials from different subjects in the Euclidean-space, ii) tailoring the techniques of deep learning for the scalograms of EEG signals to capture better frequency localization for low-frequency, longer-duration events, and iii) utilizing pretrained ConvNeXt (a modernized ResNet architecture which supersedes state-of-the-art (SOTA) image classification models) as the backbone network via adaptive finetuning. On publicly available datasets (Physionet Sleep Cassette and BNCI2014001) we benchmark our method against SOTA via cross-subject validation and demonstrate improved accuracy in cognitive activity classification along with better generalizability across cohorts.
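Below is a hedged sketch of step (i), aligning EEG trials in Euclidean space, following the common Euclidean Alignment recipe of whitening every trial of a subject by the inverse square root of that subject's mean spatial covariance. Whether EEG-NeXt implements exactly this recipe is an assumption; the array shapes are illustrative.

```python
# Hedged sketch of Euclidean-space alignment of EEG trials (shapes are assumptions).
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_align(trials: np.ndarray) -> np.ndarray:
    """trials: (n_trials, n_channels, n_samples) array from one subject/session."""
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])   # per-trial covariance
    R = covs.mean(axis=0)                                      # reference covariance
    R_inv_sqrt = fractional_matrix_power(R, -0.5).real
    return np.stack([R_inv_sqrt @ x for x in trials])          # aligned trials

subject_trials = np.random.randn(40, 22, 1000)   # e.g., 40 trials, 22 channels
aligned = euclidean_align(subject_trials)
print(aligned.shape)                              # (40, 22, 1000)
```

After alignment, the mean spatial covariance of each subject's trials is (approximately) the identity, which reduces inter-subject distribution shift before the scalogram and ConvNeXt stages.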
A unidirectional imager would only permit image formation along one direction, from an input field-of-view (FOV) A to an output FOV B, and in the reverse path, the image formation would be blocked. Here, we report the first demonstration of unidirectional imagers, presenting polarization-insensitive and broadband unidirectional imaging based on successive diffractive layers that are linear and isotropic. These diffractive layers are optimized using deep learning and consist of hundreds of thousands of diffractive phase features, which collectively modulate the incoming fields and project an intensity image of the input onto an output FOV, while blocking the image formation in the reverse direction. After their deep learning-based training, the resulting diffractive layers are fabricated to form a unidirectional imager. As a reciprocal device, the diffractive unidirectional imager has asymmetric mode processing capabilities in the forward and backward directions, where the optical modes from B to A are selectively guided/scattered to miss the output FOV, whereas for the forward direction such modal losses are minimized, yielding an ideal imaging system between the input and output FOVs. Although trained using monochromatic illumination, the diffractive unidirectional imager maintains its functionality over a large spectral band and works under broadband illumination. We experimentally validated this unidirectional imager using terahertz radiation, very well matching our numerical results. Using the same deep learning-based design strategy, we also created a wavelength-selective unidirectional imager, where two unidirectional imaging operations, in reverse directions, are multiplexed through different illumination wavelengths. Diffractive unidirectional imaging using structured materials will have numerous applications in e.g., security, defense, telecommunications and privacy protection.
In computer-aided drug discovery (CADD), virtual screening (VS) is used for identifying the drug candidates that are most likely to bind to a molecular target in a large library of compounds. Most VS methods to date have focused on using canonical compound representations (e.g., SMILES strings, Morgan fingerprints) or generating alternative fingerprints of the compounds by training progressively more complex variational autoencoders (VAEs) and graph neural networks (GNNs). Although VAEs and GNNs led to significant improvements in VS performance, these methods suffer from reduced performance when scaling to large virtual compound datasets. The performance of these methods has shown only incremental improvements in the past few years. To address this problem, we developed a novel method using multiparameter persistence (MP) homology that produces topological fingerprints of the compounds as multidimensional vectors. Our primary contribution is framing the VS process as a new topology-based graph ranking problem by partitioning a compound into chemical substructures informed by the periodic properties of its atoms and extracting their persistent homology features at multiple resolution levels. We show that the margin loss fine-tuning of pretrained Triplet networks attains highly competitive results in differentiating between compounds in the embedding space and ranking their likelihood of becoming effective drug candidates. We further establish theoretical guarantees for the stability properties of our proposed MP signatures, and demonstrate that our models, enhanced by the MP signatures, outperform state-of-the-art methods on benchmark datasets by a wide and highly statistically significant margin (e.g., 93% gain for Cleves-Jain and 54% gain for DUD-E Diverse dataset).
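The sketch below shows margin-loss fine-tuning of an embedding network with PyTorch's built-in TripletMarginLoss, the general mechanism the abstract relies on for ranking compounds in the embedding space. The encoder architecture, the 128-dimensional fingerprint input, and the way triplets are formed are illustrative assumptions, not the paper's MP-signature pipeline.

```python
# Hedged sketch of triplet margin-loss fine-tuning in an embedding space.
# Encoder, fingerprint dimension, and triplet construction are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(            # stand-in for a pretrained Triplet-network backbone
    nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64)
)
criterion = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# anchor: a known active compound; positive: another active; negative: a decoy.
anchor   = torch.randn(32, 128)     # batch of topological fingerprints (assumed 128-d)
positive = torch.randn(32, 128)
negative = torch.randn(32, 128)

optimizer.zero_grad()
loss = criterion(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
optimizer.step()
print(float(loss))
```

At inference time, candidate compounds would then be ranked by their embedding-space distance to known actives.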
The past decade has witnessed transformative applications of deep learning in various computational imaging, sensing, and microscopy tasks. Because of the supervised learning schemes adopted, most of these methods depend on large-scale, diverse, and labeled training data. The acquisition and preparation of such training image datasets are often laborious and costly, and also lead to limited generalization to new sample types. Here, we report a self-supervised learning model, termed GedankenNet, which eliminates the need for labeled or experimental training data, and demonstrate its effectiveness and superior generalization on hologram reconstruction tasks. Without any prior knowledge of the types of samples to be imaged, the model was trained using a physics-consistency loss and artificial random images that were synthetically generated without any experiments or resemblance to real-world samples. After its self-supervised training, GedankenNet successfully generalized to experimental holograms of various unseen biological samples, reconstructing the phase and amplitude images of different types of objects from experimentally acquired test holograms. GedankenNet's self-supervised learning achieved complex-field image reconstructions that are consistent with Maxwell's equations, meaning that its output inference and object solutions accurately represent wave propagation in free space, without access to experimental data or knowledge of the real samples or their spatial features. Self-supervised learning of image reconstruction tasks opens up new opportunities for various inverse problems in holography, microscopy, and computational imaging.
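A hedged numpy sketch of the kind of physics-consistency loss described above: a predicted complex object field is numerically propagated to the hologram plane with the angular spectrum method, and its intensity is compared with the input hologram. The wavelength, pixel pitch, and propagation distance are illustrative assumptions, and this is not the authors' implementation.

```python
# Hedged sketch of a physics-consistency loss for hologram reconstruction.
# Wavelength, pixel pitch, and propagation distance are illustrative assumptions.
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Free-space propagation of a complex field sampled at pitch dx over distance z."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.clip(arg, 0.0, None))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)      # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def physics_consistency_loss(pred_object_field, hologram, wavelength=532e-9,
                             dx=1.0e-6, z=500e-6):
    propagated = angular_spectrum_propagate(pred_object_field, wavelength, dx, z)
    return np.mean((np.abs(propagated) ** 2 - hologram) ** 2)   # intensity mismatch

obj = np.exp(1j * np.random.rand(256, 256))              # a synthetic random "object"
holo = np.abs(angular_spectrum_propagate(obj, 532e-9, 1.0e-6, 500e-6)) ** 2
print(physics_consistency_loss(obj, holo))                # ~0 for a consistent pair
```

Minimizing such a loss over synthetic random objects requires no experimental data or labels, which is the core idea of the self-supervised training scheme.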
Exposure to bioaerosols such as mold spores and pollen can lead to adverse health effects. There is a need for a portable and cost-effective device to monitor and quantify various bioaerosols over long periods of time. To address this need, we present a mobile and cost-effective label-free bioaerosol sensor that takes holographic images of flowing particulate matter concentrated by a virtual impactor, which selectively slows down and guides particles larger than 6 microns to fly through an imaging window. The flowing particles are illuminated by a pulsed laser diode, casting their in-line holograms onto a CMOS image sensor in a lens-free mobile imaging device. The illumination contains three short pulses with a negligible shift of the flowing particles within a single pulse, so that triplicate holograms of the same particle are recorded on a single frame before it exits the imaging field-of-view, revealing different perspectives of each particle. The particles within the virtual impactor are localized through a differential detection scheme, and a deep neural network classifies the aerosol type in a label-free manner based on the acquired holographic images. We demonstrated the success of this mobile bioaerosol detector with a virtual impactor using different types of pollen (i.e., bermuda, elm, oak, pine, sycamore, and wheat) and achieved a blind classification accuracy of 92.91%. This mobile and cost-effective device weighs approximately 700 g and can be used for the label-free sensing and quantification of various bioaerosols over extended periods of time, since it is based on a cartridge-free virtual impactor that does not capture or immobilize the particulate matter.
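A hedged sketch of the differential detection idea used to localize flowing particles: subtracting a static background frame leaves only the moving particles' signals, which are thresholded into candidate regions. The threshold value and the connected-component labeling step are illustrative assumptions, not the authors' exact processing chain.

```python
# Hedged sketch of differential detection for localizing flowing particles.
# The threshold and the use of connected-component labeling are assumptions.
import numpy as np
from scipy import ndimage

def localize_particles(frame: np.ndarray, background: np.ndarray, thresh: float = 0.1):
    """Return centroids of regions that changed relative to the background frame."""
    diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
    mask = diff > thresh * diff.max()
    labels, n = ndimage.label(mask)                      # connected components
    return ndimage.center_of_mass(mask, labels, list(range(1, n + 1)))

background = np.zeros((512, 512), dtype=np.float32)
frame = background.copy()
frame[100:110, 200:210] = 1.0                            # a synthetic moving particle
print(localize_particles(frame, background))             # [(~104.5, ~204.5)]
```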